Okay. So welcome everyone. Today we have Professor Albert Cohen from the Laboratoire Jacques-Louis
Lions, Sorbonne University. And the talk, I'm not sure if the title is a question or...
Yeah, okay. Optimal sampling in least squares methods: applications
to PDEs and inverse problems. Please, Professor Cohen, you have the floor.
Thank you very much. So thank you, Marius, for the invitation, and also to
Enrique. Yeah, I added a question mark because I'm going to review a number of
results on what I call optimal sampling; everything will hopefully become clear
soon. And the ending on applications to PDEs and inverse problems is a bit speculative. I have students,
Matthieu Dolbeault and Agustin Somacal, working on this currently, and I prefer to put a question mark there.
Okay, so before I move on, I would like this to be as interactive as possible.
So feel free to interrupt and put your mic on when something is unclear;
don't hesitate to stop me. That is what I wanted to say. So I'll start, and now I can see that I can...
Oh yeah. So I would like to start with something which is ubiquitous in numerical methods:
the task of recovering an unknown function of several variables
from its observation at certain sample points. So D is a domain in R^d, and each of the x^i, for i
equal 1 to m, is a point in this domain. We measure u at these points, and this is what we
call y^i; but these could be noisy observations, so y^i may not be exactly equal to u(x^i).
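In symbols, the setup just described reads as follows (a restatement for the record, not a slide from the talk):

\[
D \subset \mathbb{R}^d, \qquad x^1, \dots, x^m \in D, \qquad y^i \approx u(x^i), \quad i = 1, \dots, m,
\]

and the goal is to recover u from the data (x^i, y^i).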
This of course comes up in many, many applicative contexts. But I would first like to distinguish
between two very different settings. What I call the passive setting is the setting where we
don't choose the x^i: they are, so to speak, drawn and given to us, and we don't have much control.
And the active setting is where we decide where we query this function u. This is of course
a more favorable setting, but it also gives us a big responsibility about where we should query.
And to be a bit more specific, the typical description of the passive acquisition
setting, I would say, is regression in machine learning. Typically, think of an input-output pair
modeled by a random variable (x, y), where x ∈ D is the input and y ∈ R is the output, and there
is in the background an unknown joint law. What you observe are independent random realizations
of these variables; the pairs (x^i, y^i) are such realizations. And what you search for
is a function that best explains the output from the input. So typically you don't have any
determinism here: for a single x^i, you could have different possible outputs y^i.
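Writing ρ for this unknown joint law (a notation choice, not from the talk), the passive data model is:

\[
(x^i, y^i), \quad i = 1, \dots, m, \ \text{drawn i.i.d. from } \rho \ \text{on } D \times \mathbb{R}.
\]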
And think, for example, of x as a feature vector that represents, for a bank, the age and the income
of a client, and y as the way the client reimburses a loan, whether it's a good
client or a bad client, measured in some way. Well, of course this doesn't only depend
on the age and the income; there are many hidden factors, and this is why you are really in a
stochastic context. So now you would like to find some v which makes y close to v(x).
And if you measure closeness in the sense of the quadratic expectation, the quadratic loss,
then of course what minimizes this, the best choice for v, is the conditional expectation
of y given x. This is a function u(x), now a fixed function, which is called the regression
function, and which is of course unknown to us.
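In formulas, the minimization just described is the standard one:

\[
u(x) \;=\; \mathbb{E}[\, y \mid x \,] \;=\; \operatorname*{argmin}_{v} \ \mathbb{E}\,|y - v(x)|^2 .
\]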
If ũ is something that you reconstruct from your data, and if you look at this risk, then just by Pythagoras' theorem you see that the risk you
have is the minimum risk, the one achieved by the regression function,
plus an expectation which is in fact the squared L² norm of u − ũ with respect to the probability
distribution of the input variable x. So the natural way to measure the performance of any
reconstruction ũ is to ask that the error in this norm, L² of D
with respect to this probability measure μ, is as small as possible.
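Spelled out, the decomposition invoked here is:

\[
\mathbb{E}\,|y - \tilde u(x)|^2 \;=\; \mathbb{E}\,|y - u(x)|^2 \;+\; \|u - \tilde u\|_{L^2(D,\mu)}^2 ,
\]

where μ is the law of x. The cross term vanishes because y − u(x) has zero conditional mean given x; this is the Pythagoras argument.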
And you can think of this as an inherently noisy setting: again, these y^i you can view as u(x^i) plus some term.
Okay, can you hear me again? Okay. So you can think in that case that these y^i
are u(x^i) plus something which has expectation zero, because u is the conditional expectation.
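As a concrete illustration of this passive, noisy setting, here is a minimal simulation sketch. The regression function u, the input law μ, the noise level, and the polynomial least-squares fit are all hypothetical choices made for the example, not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def u(x):
    # hypothetical regression function u(x) = E[y | x]
    return np.sin(2 * np.pi * x)

# passive acquisition: m i.i.d. samples x^i drawn from mu (here mu = Uniform[0, 1])
m = 200
x = rng.uniform(0.0, 1.0, size=m)
# noisy observations y^i = u(x^i) + eta^i, with E[eta^i | x^i] = 0
y = u(x) + 0.1 * rng.standard_normal(m)

# least-squares fit of a degree-5 polynomial to the data (x^i, y^i)
u_tilde = np.poly1d(np.polyfit(x, y, deg=5))

# Monte Carlo estimate of the L2(mu) error ||u - u_tilde||
x_test = rng.uniform(0.0, 1.0, size=100_000)
l2_error = np.sqrt(np.mean((u(x_test) - u_tilde(x_test)) ** 2))
print(f"estimated L2(mu) error: {l2_error:.4f}")

The fit ũ approaches u rather than the noisy data precisely because the noise has zero conditional mean; this is the denoising point of view mentioned next.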
So think of this as a noise; you are typically in a noisy setting. This is the passive
acquisition setting, and because of this you can also view this problem as a denoising problem,
where you try to approach this function u from these y^i. All right, now let's talk about the active setting.